Can we see it all? Do we know it all? These are questions asked by humans in contemporary society to assess our tendencies toward problem solving. Recent research has explored several models for object detection, but most fail to meet the demand for objectivity and predictive accuracy, in developing and developed countries alike. Several global security threats have therefore necessitated the development of methods that address these problems effectively. This paper proposes an object detection model for a cyber-physical system known as the Smart Surveillance System (3S). The study presents a two-phase approach that highlights the advantages of the YOLOv3 deep learning architecture for real-time, visual object detection. A transfer learning approach was implemented to reduce training time and computing resources. The dataset used to train the model is the MS COCO dataset, which contains 328,000 annotated image instances. Deep learning techniques such as preprocessing, data pipelining, and detection were implemented to improve efficiency. Compared with other novel research models, the model performed excellently at detecting objects in the wild in surveillance footage, recording a precision of 99.71% and an improved mAP of 61.5.
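The transfer-learning step described above can be pictured roughly as follows. This is a minimal, hypothetical sketch, not the paper's actual training code; it assumes the ultralytics/yolov3 PyTorch Hub entry point is available, and the layer-freezing cutoff is purely illustrative.

```python
# Hypothetical sketch: start from a COCO-pretrained YOLOv3 and
# fine-tune only the final layers to cut training time and compute.
import torch

# Assumes the ultralytics/yolov3 PyTorch Hub entry point is available.
model = torch.hub.load("ultralytics/yolov3", "yolov3", pretrained=True)

# Freeze most parameters so only the last few tensors (roughly, the
# detection head) are updated during fine-tuning.
params = list(model.parameters())
for p in params[:-10]:  # "-10" is an illustrative cutoff, not the paper's
    p.requires_grad = False

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-3,
    momentum=0.9,
)
```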
The term big data was coined to refer to volumes of data that traditional data-processing techniques cannot handle. Big data is still a novel concept, and in the following literature we intend to elaborate on it in a clear manner. It begins with the concept of the subject itself and its properties, as well as two general approaches to handling it. Big data offers educational institutions an opportunity to use their information technology resources strategically to improve educational quality, guide students toward higher completion rates, and improve student persistence and outcomes. This paper explores the properties of big data as they relate to educational institutions, examines the factors influencing the adoption of big data and analytics in institutions, and attempts to establish the limiting factors hindering the use of big data in higher education institutions. A survey research design was adopted for this study, with a questionnaire as the instrument for data collection.
Facial expression recognition is an important research topic in most fields, from artificial intelligence and gaming to human-computer interaction (HCI) and psychology. This paper proposes a hybrid model for facial expression recognition that comprises a deep convolutional neural network (DCNN) and a Haar Cascade deep learning architecture. The aim is to classify real-time and digital facial images into one of the seven facial emotion categories considered. The DCNN used in this study has additional convolutional layers, ReLU activation functions, and multiple kernels to enhance filtering depth and facial feature extraction. In addition, a Haar Cascade model is used in tandem to detect facial features in real-time images and video frames. Grayscale images from the Kaggle repository (FER-2013) were used, and graphics processing unit (GPU) computing was exploited to speed up the training and validation process. Preprocessing and data augmentation techniques were applied to improve training efficiency and classification performance. Experimental results show significantly improved classification performance compared with state-of-the-art experiments and research. Likewise, compared with other conventional models, the proposed architecture is shown to excel in classification performance, with an improvement of 6% for a total accuracy of up to 70%, and a smaller execution time of 2098.8 s.
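The detect-then-classify pipeline can be sketched as below, assuming OpenCV's bundled frontal-face Haar cascade and a small stand-in CNN in place of the paper's DCNN; the 48x48 grayscale input size matches FER-2013.

```python
# Minimal sketch of the pipeline: a Haar cascade finds the face, and a
# small CNN (a stand-in for the paper's DCNN) classifies the 48x48
# grayscale crop into one of 7 emotion categories.
import cv2
import torch
import torch.nn as nn

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

emotion_net = nn.Sequential(          # illustrative stand-in architecture
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 12 * 12, 7),       # 7 facial emotion categories
)

frame = cv2.imread("frame.jpg")       # any still image or video frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
    face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))  # FER-2013 size
    logits = emotion_net(torch.from_numpy(face).float()[None, None] / 255.0)
    print(logits.argmax(dim=1))       # predicted emotion index
```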
Recent object detection models for infrared (IR) imagery are based upon deep neural networks (DNNs) and require large amounts of labeled training imagery. However, publicly-available datasets that can be used for such training are limited in their size and diversity. To address this problem, we explore cross-modal style transfer (CMST) to leverage large and diverse color imagery datasets so that they can be used to train DNN-based object detectors for IR imagery. We evaluate six contemporary stylization methods on four publicly-available IR datasets - the first comparison of its kind - and find that CMST is highly effective for DNN-based detectors. Surprisingly, we find that existing data-driven methods are outperformed by a simple grayscale stylization (an average of the color channels). Our analysis reveals that existing data-driven methods are either too simplistic or introduce significant artifacts into the imagery. To overcome these limitations, we propose meta-learning style transfer (MLST), which learns a stylization by composing and tuning well-behaved analytic functions. We find that MLST leads to more complex stylizations without introducing significant image artifacts and achieves the best overall detector performance on our benchmark datasets.
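The grayscale baseline the paper finds so effective is simple enough to state in a few lines; a sketch of the channel-averaging stylization (the function name is illustrative):

```python
# The surprisingly strong baseline: stylize RGB training imagery as
# grayscale (an unweighted average of the color channels) so color
# datasets can be used to train an IR-domain detector.
import numpy as np

def grayscale_stylize(rgb: np.ndarray) -> np.ndarray:
    """rgb: HxWx3 uint8 image -> HxWx3 'IR-like' image (channel average)."""
    gray = rgb.astype(np.float32).mean(axis=2, keepdims=True)
    return np.repeat(gray, 3, axis=2).astype(np.uint8)  # keep 3 channels for the DNN
```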
Large language models (LLMs) have shown impressive results across a variety of tasks while requiring little or no direct supervision. Further, there is mounting evidence that LLMs may have potential in information-seeking scenarios. We believe the ability of an LLM to attribute the text that it generates is likely to be crucial for both system developers and users in this setting. We propose and study Attributed QA as a key first step in the development of attributed LLMs. We develop a reproducible evaluation framework for the task, using human annotations as a gold standard and a correlated automatic metric that we show is suitable for development settings. We describe and benchmark a broad set of architectures for the task. Our contributions give some concrete answers to two key questions (How to measure attribution?, and How well do current state-of-the-art methods perform on attribution?), and give some hints as to how to address a third key question (How to build LLMs with attribution?).
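One plausible shape for a correlated automatic metric is an entailment check between the cited passage and the generated answer. The sketch below is an assumption using an off-the-shelf MNLI model, not necessarily the paper's actual metric; the prompt template and threshold-free scoring are also assumptions.

```python
# Hedged sketch: score attribution as NLI entailment of the answer by
# the cited passage, using a generic MNLI classifier.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def attribution_score(passage: str, question: str, answer: str) -> float:
    """Return the entailment probability of the answer given the passage."""
    hypothesis = f"The answer to the question '{question}' is '{answer}'."
    scores = nli({"text": passage, "text_pair": hypothesis}, top_k=None)
    return next(s["score"] for s in scores if s["label"] == "ENTAILMENT")
```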
Transformers have proved to be very effective for visual recognition tasks. In particular, vision transformers construct compressed global representations through self-attention and learnable class tokens. Multi-resolution transformers have shown recent successes in semantic segmentation but can only capture local interactions in high-resolution feature maps. This paper extends the notion of global tokens to build GLobal Attention Multi-resolution (GLAM) transformers. GLAM is a generic module that can be integrated into most existing transformer backbones. GLAM includes learnable global tokens which, unlike previous methods, can model interactions between all image regions, and it extracts powerful representations during training. Extensive experiments show that GLAM-Swin or GLAM-Swin-UNet exhibit substantially better performance than their vanilla counterparts on ADE20K and Cityscapes. Moreover, GLAM can be used to segment large 3D medical images, and GLAM-nnFormer achieves new state-of-the-art performance on the BCV dataset.
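A minimal sketch of the learnable-global-token idea (an illustration, not the actual GLAM module): a few extra tokens are concatenated with the patch tokens so self-attention can mix information across all image regions.

```python
# Illustrative module: learnable global tokens joined with patch tokens
# before self-attention, so every image region can interact with every other.
import torch
import torch.nn as nn

class GlobalTokenAttention(nn.Module):
    def __init__(self, dim: int = 256, n_global: int = 8, n_heads: int = 8):
        super().__init__()
        self.global_tokens = nn.Parameter(torch.randn(1, n_global, dim))
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (B, N, dim) from any transformer backbone stage
        g = self.global_tokens.expand(patch_tokens.size(0), -1, -1)
        x = torch.cat([g, patch_tokens], dim=1)   # global + local tokens
        out, _ = self.attn(x, x, x)               # all-pairs interactions
        return out[:, g.size(1):]                 # updated patch tokens
```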
In this paper, we present the Multi-view Extended Videos with Identities (MEVID) dataset for large-scale, video person re-identification (ReID) in the wild. To our knowledge, MEVID represents the most-varied video person ReID dataset, spanning an extensive indoor and outdoor environment across nine unique dates in a 73-day window, various camera viewpoints, and entity clothing changes. Specifically, we label the identities of 158 unique people wearing 598 outfits taken from 8,092 tracklets with an average length of about 590 frames, seen in 33 camera views from the very large-scale MEVA person activities dataset. While other datasets have more unique identities, MEVID emphasizes a richer set of information about each individual, such as: 4 outfits/identity vs. 2 outfits/identity in CCVID, 33 viewpoints across 17 locations vs. 6 in 5 simulated locations for MTA, and 10 million frames vs. 3 million for LS-VID. Being based on the MEVA video dataset, we also inherit data that is intentionally demographically balanced to the continental United States. To accelerate the annotation process, we developed a semi-automatic annotation framework and GUI that combines state-of-the-art real-time models for object detection, pose estimation, person ReID, and multi-object tracking. We evaluate several state-of-the-art methods on MEVID challenge problems and comprehensively quantify their robustness in terms of changes of outfit, scale, and background location. Our quantitative analysis on the realistic, unique aspects of MEVID shows that there are significant remaining challenges in video person ReID and indicates important directions for future research.
In collider-based particle and nuclear physics experiments, data are produced at such extreme rates that only a subset can be recorded for later analysis. Typically, algorithms select individual collision events for preservation and store the complete experimental response. A relatively new alternative strategy is to additionally save a partial record for a larger subset of events, allowing for later specific analysis of a larger fraction of events. We propose a strategy that bridges these paradigms by compressing entire events for generic offline analysis but at a lower fidelity. An optimal-transport-based $\beta$ Variational Autoencoder (VAE) is used to automate the compression and the hyperparameter $\beta$ controls the compression fidelity. We introduce a new approach for multi-objective learning functions by simultaneously learning a VAE appropriate for all values of $\beta$ through parameterization. We present an example use case, a di-muon resonance search at the Large Hadron Collider (LHC), where we show that simulated data compressed by our $\beta$-VAE has enough fidelity to distinguish distinct signal morphologies.
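The parameterized multi-objective idea can be sketched as a β-conditioned VAE objective, with β sampled per batch so that a single model is learned for all fidelity levels; the sampling range and conditioning details below are assumptions.

```python
# Hedged sketch of the parameterized beta-VAE objective: beta is sampled
# per batch and also fed to the encoder/decoder as an extra input, so one
# model covers the whole compression-fidelity trade-off.
import torch

def beta_vae_loss(recon_cost, mu, logvar, beta):
    """recon_cost: reconstruction term (e.g., an optimal-transport distance);
    mu, logvar: latent Gaussian parameters; beta: sampled fidelity knob."""
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
    return (recon_cost + beta * kl).mean()

# One beta per event in the batch, log-uniform over an assumed range.
beta = 10 ** torch.empty(64).uniform_(-3, 1)
```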
Falls are a leading cause of fatal and non-fatal injuries, particularly for older adults. Causes internal to the body (e.g., illness) or external to it (e.g., active or passive perturbations) can lead to loss of balance. Active perturbations result from external forces applied to a person, while passive perturbations result from human motion interacting with static obstacles. This work proposes a metric that allows the torso to be monitored and correlated with active and passive perturbations. We show that large changes in torso sway can be strongly correlated with active perturbations. We also show that, by conditioning on the past trajectory, torso motion, and the surrounding scene, we can reasonably predict the future path and expected changes in torso sway. This would have direct applications in fall prevention. The results show that torso sway is strongly correlated with perturbations, and that our model is able to exploit the visual cues present in the panoramic images and modulate its predictions accordingly.
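A hedged sketch of the conditional forecasting idea: encode the past trajectory and torso motion with a recurrent network, fuse with visual scene features, and regress the future path and sway. All layer sizes and input layouts below are illustrative assumptions, not the paper's architecture.

```python
# Illustrative forecaster: condition on past motion plus scene features
# to predict the future path and expected torso sway.
import torch
import torch.nn as nn

class SwayForecaster(nn.Module):
    def __init__(self, horizon: int = 30):
        super().__init__()
        self.encoder = nn.GRU(input_size=5, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128 + 64, horizon * 3)  # (x, y, sway) per step
        self.horizon = horizon

    def forward(self, past, scene_feat):
        # past: (B, T, 5) = 2D position + 3 torso-sway angles per frame
        # scene_feat: (B, 64) visual features from the panoramic view
        _, h = self.encoder(past)
        out = self.head(torch.cat([h[-1], scene_feat], dim=1))
        return out.view(-1, self.horizon, 3)
```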
The use of synthetic (or simulated) data for training machine learning models has grown rapidly in recent years. Synthetic data can often be produced faster and more cheaply than its real-world counterpart. One challenge of using synthetic imagery, however, is scene design: for example, the choice of content and its features and spatial arrangement. To be effective, the design must not only be realistic but also appropriate for the target domain, which (by assumption) is unlabeled. In this work, we propose an approach to automatically choose the design of synthetic imagery based on unlabeled real-world imagery. Our approach, called Neural-Adjoint Meta-Simulation (NAMS), builds upon the seminal meta-simulation approach. In contrast to current state-of-the-art methods, our approach can be pre-trained once offline and then provides fast design inference for new target imagery. Using both synthetic and real-world problems, we show that NAMS infers synthetic designs that match both in-domain and out-of-domain target imagery, and that training segmentation models with NAMS-designed images yields superior results compared with naïve random designs and state-of-the-art meta-simulation methods.
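The fast design-inference step can be pictured as a single forward pass through a pre-trained network that maps target-image features to simulator scene parameters; everything named below is a hypothetical stand-in for the method's actual components.

```python
# Hedged sketch of fast design inference: after offline pre-training, a
# small network maps features of unlabeled target images directly to
# simulator scene-design parameters in one forward pass.
import torch
import torch.nn as nn

design_net = nn.Sequential(           # stands in for the pre-trained model
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 16),               # 16 hypothetical scene parameters
    nn.Sigmoid(),                     # parameters normalized to [0, 1]
)

target_feat = torch.randn(1, 512)     # stand-in for target-image features
design = design_net(target_feat)      # one forward pass = fast inference
```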